Deterministic Coherence Distillation
Authors
Abstract
Similar papers
Deterministic entanglement distillation for secure double-server blind quantum computation
Blind quantum computation (BQC) offers an efficient way for a client who lacks sufficiently sophisticated technology and knowledge to perform universal quantum computation. The single-server BQC protocol requires the client to have some minimal quantum ability, while the double-server BQC protocol makes the client's device completely classical, resorting to the pure and clean Bell stat...
Deterministic coherence resonance in coupled chaotic oscillators with frequency mismatch.
A small mismatch between natural frequencies of unidirectionally coupled chaotic oscillators can induce coherence resonance in the slave oscillator for a certain coupling strength. This surprising phenomenon resembles "stabilization of chaos by chaos," i.e., the chaotic driving applied to the chaotic system makes its dynamics more regular when the natural frequency of the slave oscillator is a ...
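The setup this abstract describes can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual model: it assumes Rössler oscillators (a common choice for such studies), forward-Euler integration, and illustrative values for the coupling strength `k` and the frequency mismatch, none of which are taken from the source.

```python
import numpy as np

def rossler_step(state, omega, drive=0.0, k=0.0, a=0.2, b=0.2, c=5.7, dt=0.01):
    """One forward-Euler step of a Rössler oscillator with natural
    frequency omega, driven through its x-variable with strength k."""
    x, y, z = state
    dx = -omega * y - z + k * (drive - x)
    dy = omega * x + a * y
    dz = b + z * (x - c)
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

def simulate(n_steps=5000, omega_master=1.0, omega_slave=0.98, k=0.1):
    """Unidirectional master -> slave coupling with a small frequency
    mismatch, mirroring the configuration the abstract describes."""
    master = np.array([0.1, 0.0, 0.0])
    slave = np.array([0.0, 0.1, 0.0])
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        master = rossler_step(master, omega_master)
        # Only the slave receives the drive: the coupling is unidirectional.
        slave = rossler_step(slave, omega_slave, drive=master[0], k=k)
        traj[i] = master[0], slave[0]
    return traj
```

Diagnosing coherence resonance proper would additionally require a regularity measure (e.g. the normalized fluctuation of inter-peak intervals) swept over `k`; the sketch only sets up the coupled dynamics.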
Dynamical control of qubit coherence: Random versus deterministic schemes
We revisit the problem of switching off unwanted phase evolution and decoherence in a single two-state quantum system in the light of recent results on random dynamical decoupling methods [L. Viola and E. Knill, Phys. Rev. Lett. 94, 060502 (2005)]. A systematic comparison with standard cyclic decoupling is effected for a variety of dynamical regimes, including the case of both semiclassical and...
Strong Cut-Elimination, Coherence, and Non-deterministic Semantics
An (n, k)-ary quantifier is a generalized logical connective, binding k variables and connecting n formulas. Canonical systems with (n, k)-ary quantifiers form a natural class of Gentzen-type systems which in addition to the standard axioms and structural rules have only logical rules in which exactly one occurrence of a quantifier is introduced. The semantics for these systems is provided usin...
Dropout distillation
Dropout is a popular stochastic regularization technique for deep neural networks that works by randomly dropping (i.e. zeroing) units from the network during training. This randomization implicitly trains an ensemble of exponentially many networks sharing the same parametrization, which should be averaged at test time to deliver the final prediction. A typical workaround for t...
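The train/test asymmetry the abstract refers to can be made concrete with a minimal NumPy sketch. This is the standard dropout mechanism, not the distillation method the paper itself proposes; the function names and the keep-probability scaling convention are illustrative.

```python
import numpy as np

def dropout_train(x, p, rng):
    """Training-time dropout: independently zero each activation
    with probability p, sampling one network from the ensemble."""
    mask = rng.random(x.shape) >= p
    return x * mask

def dropout_test(x, p):
    """Common test-time approximation: scale activations by the keep
    probability 1 - p instead of explicitly averaging the
    exponentially many thinned networks."""
    return x * (1.0 - p)
```

Averaging many `dropout_train` samples of the same input converges to `dropout_test`, which is why the scaling rule serves as the usual stand-in for the full ensemble average.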
Journal
Journal title: Physical Review Letters
Year: 2019
ISSN: 0031-9007,1079-7114
DOI: 10.1103/physrevlett.123.070402